A learning rule for very simple universal approximators consisting of a single layer of perceptrons

Authors

  • Peter Auer
  • Harald Burgsteiner
  • Wolfgang Maass
Abstract

One may argue that the simplest type of neural network beyond a single perceptron is an array of several perceptrons in parallel. In spite of their simplicity, such circuits can compute any Boolean function if one views the majority of the binary perceptron outputs as the binary output of the parallel perceptron, and they are universal approximators for arbitrary continuous functions with values in [0,1] if one views the fraction of perceptrons that output 1 as the analog output of the parallel perceptron. Note that, in contrast to the familiar model of a "multi-layer perceptron", the parallel perceptron considered here has only binary values as outputs of the gates on the hidden layer. For a long time it was thought that there exists no competitive learning algorithm for these extremely simple neural networks, which are also known as committee machines. It is commonly assumed that one has to replace the hard threshold gates on the hidden layer by sigmoidal gates (or RBF gates), and that one has to tune the weights on at least two successive layers, in order to achieve satisfactory learning results for any class of neural networks that yields universal approximators. We show that this assumption is not true, by exhibiting a simple learning algorithm for parallel perceptrons: the parallel delta rule (p-delta rule). In contrast to backprop for multi-layer perceptrons, the p-delta rule only has to tune a single layer of weights, and it does not require the computation and communication of analog values with high precision. Reduced communication also distinguishes our new learning rule from other learning rules for parallel perceptrons such as MADALINE. These features make the p-delta rule attractive both as a biologically more realistic alternative to backprop in biological neural circuits and for implementations in special-purpose hardware. We show that the p-delta rule also implements gradient descent, with regard to a suitable error measure, although it does not require the computation of derivatives. Furthermore, experiments on common real-world benchmark datasets show that its performance is competitive with that of other learning approaches from neural networks and machine learning. It has recently been shown [Anthony, M. (2007). On the generalization error of fixed combinations of classifiers. Journal of Computer and System Sciences, 73(5), 725-734; Anthony, M. (2004). On learning a function of perceptrons. In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks (Vol. 2, pp. 967-972)] that one can also prove quite satisfactory bounds for the generalization error of this new learning rule.
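To make the model and the update concrete, the following is a minimal sketch in Python/NumPy of a parallel perceptron (summed binary votes, squashed into an analog output) together with a simplified p-delta-style update. It is a sketch under stated assumptions, not the paper's exact rule: the constants eta, eps, gamma, mu, the squashing slope rho, and the toy target below are illustrative choices, and the paper's clear-margin bookkeeping is handled here only approximately.

```python
import numpy as np

rng = np.random.default_rng(0)

def parallel_perceptron(W, x, rho):
    """Analog output of a parallel perceptron: the summed +1/-1 votes of the
    hidden perceptrons, squashed into [-1, 1] by a piecewise-linear function."""
    votes = np.where(W @ x >= 0, 1.0, -1.0)    # binary outputs of the hidden layer
    return float(np.clip(votes.sum() / rho, -1.0, 1.0))

def p_delta_update(W, x, target, eta=0.01, eps=0.05, gamma=0.1, mu=1.0):
    """One simplified p-delta-style update (a sketch, not the paper's exact rule).

    Each perceptron only needs the direction of the circuit's error: units that
    voted on the wrong side are pushed across their hyperplane, and the remaining
    units are pushed away from it until they reach a clear margin gamma.
    """
    n = W.shape[0]
    out = parallel_perceptron(W, x, rho=n)
    for i in range(n):
        a = float(W[i] @ x)
        if out > target + eps and a >= 0:       # output too high and unit voted +1
            W[i] -= eta * x
        elif out < target - eps and a < 0:      # output too low and unit voted -1
            W[i] += eta * x
        elif 0 <= a < gamma:                    # margin stabilisation for the rest
            W[i] += eta * mu * x
        elif -gamma < a < 0:
            W[i] -= eta * mu * x
        W[i] /= np.linalg.norm(W[i]) + 1e-12    # keep each weight vector normalised
    return W

# Toy usage: 9 perceptrons on 3 inputs plus a constant bias component.
d, n = 4, 9
W = rng.normal(size=(n, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)
for _ in range(5000):
    x = np.append(rng.uniform(-1.0, 1.0, d - 1), 1.0)
    target = float(np.clip(x[0] - 0.5 * x[1], -1.0, 1.0))   # arbitrary target in [-1, 1]
    W = p_delta_update(W, x, target)
```

Note that the readout only needs each unit's binary vote and the sign of the circuit-level error, which is where the reduced communication compared to backprop comes from.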

Similar articles

Reducing Communication for Distributed Learning in Neural Networks

A learning algorithm is presented for circuits consisting of a single layer of perceptrons. We refer to such circuits as parallel perceptrons. In spite of their simplicity, these circuits are universal approximators for arbitrary Boolean and continuous functions. In contrast to backprop for multi-layer perceptrons, our new learning algorithm, the parallel delta rule (p-delta rule), only has t...

A Learning Rule for Universal Approximators with a Single Non-Linearity

A learning algorithm is presented for circuits consisting of a single soft winner-take-all or k-winner-take-all gate applied to linear sums. We show that for these circuits, gradient descent with regard to a suitable error function does not run into the familiar credit-assignment problem. Furthermore, in contrast to backprop for multi-layer perceptrons, this learning algorithm does not require th...
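As a rough illustration of the circuit class referred to here, the sketch below applies a hard k-winner-take-all gate to the linear sums, plus a softmax as one common soft variant. The softmax form and the temperature parameter are illustrative assumptions, not details taken from the cited paper.

```python
import numpy as np

def k_wta_circuit(W, x, k):
    """Hard k-winner-take-all gate on the linear sums W @ x:
    returns a 0/1 vector with ones at the k largest sums."""
    z = W @ x
    out = np.zeros_like(z, dtype=float)
    out[np.argsort(z)[-k:]] = 1.0               # the k largest linear sums win
    return out

def soft_wta_circuit(W, x, temperature=1.0):
    """One common soft winner-take-all variant: a softmax over the linear sums,
    so each unit receives a graded share of the 'win' instead of a hard 0/1."""
    z = (W @ x) / temperature
    e = np.exp(z - z.max())                     # subtract the max for stability
    return e / e.sum()
```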

On Boolean Combinations of Definitive Classifiers

We consider the sample complexity of concept learning when we classify by using a fixed Boolean function of the outputs of a number of different classifiers. Here, we take into account the ‘margins’ of each of the constituent classifiers. A special case is that in which the constituent classifiers are linear threshold functions (or perceptrons) and the fixed Boolean function is the majority fun...

Growing Layers of Perceptrons: Introducing the Extentron Algorithm

The ideas presented here are based on two observations about perceptrons: (1) when the perceptron learning algorithm cycles among hyperplanes, the hyperplanes may be compared to select one that gives a best split of the examples, and (2) it is always possible for the perceptron to build a hyperplane that separates at least one example from all the rest. We describe the Extentron, which grows multi...

Pulling back error to the hidden-node parameter technology: Single-hidden-layer feedforward network without output weight

According to conventional neural network theories, the capability of single-hidden-layer feedforward neural networks (SLFNs) rests on the parameters of the weighted connections and the hidden nodes. SLFNs are universal approximators when at least the hidden-node parameters and the output weights of the network exist. Unlike the above neural network theories, this paper indicates that in o...

Journal:
  • Neural networks: the official journal of the International Neural Network Society

Volume 21, Issue 5

Pages: -

Publication date: 2008